
    An image retrieval system based on explicit and implicit feedback on a tablet computer

    Our research aims at developing an image retrieval system which uses relevance feedback to build a hybrid search/recommendation system for images according to users’ interests. An image retrieval application running on a tablet computer gathers explicit feedback through the touchscreen but also uses multiple sensing technologies to gather implicit feedback such as emotion and action. A recommendation mechanism driven by collaborative filtering is implemented to verify our interaction design.

    FastSal: a Computationally Efficient Network for Visual Saliency Prediction

    This paper focuses on the problem of visual saliency prediction, predicting regions of an image that tend to attract human visual attention, under a constrained computational budget. We modify and test various recent efficient convolutional neural network architectures like EfficientNet and MobileNetV2 and compare them with existing state-of-the-art saliency models such as SalGAN and DeepGaze II, both in terms of standard accuracy metrics like Area Under Curve (AUC) and Normalized Scanpath Saliency (NSS), and in terms of computational complexity and model size. We find that MobileNetV2 makes an excellent backbone for a visual saliency model and can be effective even without a complex decoder. We also show that knowledge transfer from a more computationally expensive model like DeepGaze II can be achieved via pseudo-labelling an unlabelled dataset, and that this approach gives results on par with many state-of-the-art algorithms at a fraction of the computational cost and model size. Source code is available at https://github.com/feiyanhu/FastSal
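    The pseudo-labelling idea described above (an expensive "teacher" model annotates unlabelled data, and a cheap "student" is trained on those annotations) can be sketched as follows. This is a toy illustration only: the teacher here is a fixed linear map standing in for a model like DeepGaze II, and the student is a least-squares fit, not the actual FastSal networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "teacher": an expensive model we can query but not deploy.
# In FastSal this role is played by DeepGaze II; here it is a linear map.
w_teacher = rng.standard_normal(16)

def teacher_predict(x):
    return x @ w_teacher

# Unlabelled pool: inputs with no ground-truth annotations.
X_unlabelled = rng.standard_normal((500, 16))

# Pseudo-labelling: the teacher annotates the unlabelled pool ...
pseudo_labels = teacher_predict(X_unlabelled)

# ... and a cheaper "student" is fitted to those pseudo-labels.
w_student, *_ = np.linalg.lstsq(X_unlabelled, pseudo_labels, rcond=None)

# The student now mimics the teacher on unseen inputs.
X_test = rng.standard_normal((100, 16))
err = np.max(np.abs(X_test @ w_student - teacher_predict(X_test)))
assert err < 1e-6  # student reproduces the teacher's outputs
```

    The point of the design is that no human-labelled data is needed for the student: the teacher's predictions are treated as ground truth, trading some label quality for a much cheaper deployable model.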

    Periodicity detection and its application in lifelog data

    Wearable sensors are catching our attention not only in industry but also in the market. We can now acquire sensor data from different types of health tracking devices like smart watches, smart bands and lifelog cameras, and most smart phones are capable of tracking and logging information using built-in sensors. As data is generated and collected from various sources constantly, researchers have focused on interpreting and understanding the semantics of this longitudinal multi-modal data. One challenge is the fusion of multi-modal data and achieving good performance on tasks such as activity recognition, event detection and event segmentation. The classical approach to processing the data generated by wearable sensors has three main parts: 1) event segmentation, 2) event recognition, and 3) event retrieval. Many papers have been published in each of these three fields. This thesis focuses on the longitudinal aspect of the data from wearable sensors, instead of concentrating on data over a short period of time. The following are several key research questions in the thesis. Does longitudinal sensor data have unique features that can distinguish the subject generating the data from other subjects? In other words, from the longitudinal perspective, does the data from different subjects share so much common structure, similarity or identical patterning that it is difficult to identify a subject from the data? If this is the case, what are those common patterns? If we are able to eliminate those similarities from the data, does it show more specific features that we can use to model the data series and predict future values? If there are repeating patterns in longitudinal data, we can use different methods to compute the periodicity of the recurring patterns and, furthermore, to identify and extract those patterns.
    Following that, we should be able to compare local data over a short time period with more global patterns in order to show the regularity of the local data. Some case studies are included in the thesis to show the value of longitudinal lifelog data in correlating health conditions with training performance.

    Periodicity detection in lifelog data with missing and irregularly sampled data

    Lifelogging is the ambient, continuous digital recording of a person’s everyday activities for a variety of possible applications. Much of the work to date in lifelogging has focused on developing sensors, capturing information, processing it into events and then supporting event-based access to the lifelog for applications like memory recall, behaviour analysis or similar. With the recent arrival of aggregating platforms such as Apple’s HealthKit, Microsoft’s HealthVault and Google’s Fit, we are now able to collect and aggregate data from lifelog sensors, to centralize the management of data and in particular to search for and detect patterns of usage for individuals and across populations. In this paper, we present a framework that detects both low-level and high-level periodicity in lifelog data, detecting hidden patterns of which users would not otherwise be aware. We detect periodicities in time series using a combination of correlograms and periodograms, computed with various signal processing algorithms. Periodicity detection in lifelogs is particularly challenging because the lifelog data itself is not always continuous and can have gaps, as users may use their lifelog devices intermittently. To illustrate that periodicity can be detected from such data, we apply periodicity detection to three lifelog datasets with varying levels of completeness and accuracy.
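    For data with gaps and irregular sampling of the kind described above, a Lomb-Scargle periodogram is one standard way to estimate periodicity without interpolating the missing samples. The following sketch (illustrative only; the window lengths, noise level and frequency grid are assumptions, not the paper's parameters) recovers a 24-hour cycle from hourly data with roughly 40% of samples missing.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)

# Three weeks of hourly samples with a daily rhythm plus noise.
t = np.arange(0, 24 * 21, dtype=float)
signal = np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(t.size)

# Simulate intermittent device use: drop ~40% of the samples at random.
keep = rng.random(t.size) > 0.4
t_obs = t[keep]
y_obs = signal[keep] - signal[keep].mean()   # centre before the periodogram

# Evaluate power over candidate periods from 6 to 48 hours.
periods = np.linspace(6, 48, 500)            # hours
freqs = 2 * np.pi / periods                  # lombscargle wants angular freq
power = lombscargle(t_obs, y_obs, freqs)

best_period = periods[np.argmax(power)]      # peak lands near 24 hours
```

    Unlike an FFT-based periodogram, Lomb-Scargle evaluates a least-squares sinusoid fit at each candidate frequency directly on the irregular timestamps, which is why no resampling or gap-filling step is needed.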

    Implications of Rewards and Punishments for Content Generations by Key Opinion Leaders

    Nowadays, e-commerce platforms increasingly rely on content generated by key opinion leaders (KOLs) to engage customers and drive product sales. To stay on top of this growth, e-commerce content platforms have introduced rewards and punishments policies to ensure content quality. However, their effectiveness remains unclear. Besides, there is a dearth of research focusing on such performance-based output control in the extant platform governance and user-generated content (UGC) literature. In this study, based on reinforcement theory and the UGC literature, we investigate the effects of monetary rewards and punishments on the quantity and quality of content generated by KOLs in the e-commerce content platform context. Using data collected from JD WeChat Shopping Circle, we empirically tested our hypotheses. Our results indicate that punishments significantly increase the quantity and quality of content generated by KOLs. Monetary rewards only have significantly positive effects on the quality of KOLs' generated content. Nevertheless, the magnitude of the effects of monetary rewards is larger than that of punishments. Theoretical and practical implications are discussed.

    Pattern detection in lifelog data

    Lifelogging technology, such as wearable sensors and the smart home, is attracting attention from industry, academia and the market. We propose a framework to handle such aggregated multimodal and longitudinal data. The system will take advantage of the rich chronological information it carries and implement processes such as data cleaning, low- and high-level pattern detection, and giving feedback to users.

    Formulating queries for collecting training examples in visual concept classification

    Video content can be automatically analysed and indexed using trained classifiers which map low-level features to semantic concepts. Such classifiers need training data consisting of sets of images which contain such concepts, and recently it has been discovered that such training data can be located using text-based search of image databases on the internet. Formulating the text queries which locate these training images is the challenge we address here. In this paper we present preliminary results on TRECVid data of concept classification using automatically crawled images as training data, and we compare the results with those obtained from manually annotated training sets.

    Using periodicity intensity to detect long term behaviour change

    This paper introduces a new way to analyse and visualize quantified-self or lifelog data captured from any lifelogging device over an extended period of time. The mechanism works on the raw, unstructured lifelog data by detecting periodicities, those repeating patterns that occur within our lifestyles at different frequencies including daily, weekly, seasonal, etc. Focusing on the 24-hour cycle, we calculate the strength of the 24-hour periodicity at 24-hour intervals over an extended period of a lifelog. Changes in this strength of the 24-hour cycle can illustrate changes or shifts in underlying human behaviour. We have performed this analysis on several lifelog datasets of durations from several weeks to almost a decade, from recordings of training distances to sleep data. In this paper we use 24-hour accelerometer data to illustrate the technique, showing how changes in human behaviour can be identified.
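    The sliding-window periodicity-intensity idea described above can be sketched as follows: compute a periodogram over a window of the signal, read off the power in the bin nearest the 24-hour frequency, then slide the window forward by 24 hours and repeat. The window length, step and synthetic data here are illustrative assumptions, not the paper's actual settings; the simulated subject has a strong daily rhythm for two weeks and then loses it.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(1)

# Four weeks of hourly activity: a strong daily rhythm in weeks 1-2,
# then the rhythm is lost (pure noise) in weeks 3-4.
hours = np.arange(24 * 28)
regular = np.sin(2 * np.pi * hours[: 24 * 14] / 24)
disrupted = rng.standard_normal(24 * 14)
activity = np.concatenate([regular, disrupted])

window = 24 * 7    # one-week analysis window (168 hourly samples)
step = 24          # slide the window forward one day at a time

intensities = []
for start in range(0, activity.size - window + 1, step):
    seg = activity[start:start + window]
    freqs, power = periodogram(seg, fs=1.0)        # fs = 1 sample per hour
    bin_24h = np.argmin(np.abs(freqs - 1 / 24))    # bin nearest the 24h cycle
    intensities.append(power[bin_24h])

intensities = np.array(intensities)
# Intensity of the 24h cycle collapses once the rhythm is disrupted.
assert intensities[0] > intensities[-1]
```

    Plotting `intensities` against time gives the kind of behaviour-change visualization the paper describes: a drop in the curve marks the point where the 24-hour regularity of the underlying lifestyle weakened.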

    Periodicity intensity reveals insights into time series data: three use cases

    Periodic phenomena are oscillating signals found in many naturally-occurring time series. A periodogram can be used to measure the intensities of oscillations at different frequencies over an entire time series, but sometimes we are interested in measuring how periodicity intensity at a specific frequency varies throughout the time series. This can be done by calculating periodicity intensity within a window, then sliding the window and recalculating the intensity, giving an indication of how periodicity intensity at a specific frequency changes throughout the series. We illustrate three applications of this, the first of which is movements of a herd of new-born calves, where we show how intensity of the 24h periodicity increases and decreases synchronously across the herd. We also show how changes in 24h periodicity intensity of activities detected from in-home sensors can be indicative of overall wellness. We illustrate this on several weeks of sensor data gathered from each of the homes of 23 older adults. Our third application is the intensity of 7-day periodicity of hundreds of university students accessing online resources from a virtual learning environment (VLE), and how the regularity of their weekly learning behaviours changes throughout a teaching semester. The paper demonstrates how periodicity intensity reveals insights into time series data not visible using other forms of analysis.